

Sampling-Decomposable Generative Adversarial Recommender

Neural Information Processing Systems

Being often trained on implicit user feedback, many recommenders suffer from the sparsity challenge due to the lack of explicit negative samples. GAN-style recommenders (e.g., IRGAN) address the challenge by learning a generator and a discriminator adversarially.


Supplementary Material for the Paper "Sampling-Decomposable Generative Adversarial Recommender"

Neural Information Processing Systems

In the appendix, we start with the proofs of Theorems 2.1 and 2.2 in Section A. Then, we prove the correctness of Propositions 2.2 and 2.3 in Section B. After that, the detailed derivation of our approximated loss for learning the discriminator is provided in Section C. Finally, the sensitivity of some important hyperparameters is analyzed: Figure 1(a) demonstrates the effect of the embedding size, Figure 1(b) shows the effect of the size of the item sample set for learning the discriminator, and Figure 1(c) reports the effect of the sizes of the item and context sample sets for learning the generator.


Review for NeurIPS paper: Sampling-Decomposable Generative Adversarial Recommender

Neural Information Processing Systems

Summary and Contributions: This paper analyzed the well-known GAN-based information retrieval framework IRGAN in the recommendation setting. It proposed multiple interesting modifications that significantly improve its training efficiency and scalability for recommendation tasks. Specifically, the paper first pointed out two problems of IRGAN: (1) the simple GAN objective can cause the optimal negative sampler to degenerate to extreme cases (delta distributions), and (2) sampling from the optimal negative sampler is computationally expensive. To address (1), the paper proposed adding an entropy regularization that smooths the (optimal) negative sampler distribution. To address (2), the paper suggested using self-normalized importance sampling to approximate the optimal negative sampler found in (1), where sampling from the proposed distribution can be decomposed into two-step categorical sampling. Further, the paper described a strategy for learning the proposed distribution by minimizing the estimation variance through a constrained optimization.
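The self-normalized importance sampling the review refers to estimates an expectation under a target distribution known only up to a constant, using samples from a tractable proposal; normalizing the weights by their own sum makes the unknown constant cancel. A generic sketch (not the paper's estimator; function names are illustrative):

```python
import math
import random

def snis_expectation(f, target_logp, proposal_logp, samples):
    """Self-normalized importance sampling estimate of E_p[f(x)].

    target_logp and proposal_logp may both be unnormalized:
    any additive constant in either cancels after normalizing
    the weights by their sum.
    """
    logw = [target_logp(x) - proposal_logp(x) for x in samples]
    m = max(logw)  # log-sum-exp shift for numerical stability
    w = [math.exp(lw - m) for lw in logw]
    total = sum(w)
    return sum(wi * f(x) for wi, x in zip(w, samples)) / total

# Example: estimate the mean of N(1, 1) using samples from N(0, 2).
rng = random.Random(42)
samples = [rng.gauss(0.0, 2.0) for _ in range(200_000)]
est = snis_expectation(
    lambda x: x,
    lambda x: -(x - 1.0) ** 2 / 2,  # unnormalized N(1, 1) log-density
    lambda x: -x ** 2 / 8,          # unnormalized N(0, 2) log-density
    samples,
)
```

The estimator is biased for finite sample sizes but consistent, which is why the review also highlights the paper's variance-minimization strategy for choosing the proposal.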


Sampling-Decomposable Generative Adversarial Recommender

Jin, Binbin, Lian, Defu, Liu, Zheng, Liu, Qi, Ma, Jianhui, Xie, Xing, Chen, Enhong

arXiv.org Artificial Intelligence

Recommendation techniques are important approaches for alleviating information overload. Being often trained on implicit user feedback, many recommenders suffer from the sparsity challenge due to the lack of explicit negative samples. GAN-style recommenders (e.g., IRGAN) address the challenge by learning a generator and a discriminator adversarially, such that the generator produces increasingly difficult samples for the discriminator to accelerate optimizing the discrimination objective. However, producing samples from the generator is very time-consuming, and our empirical study shows that the discriminator performs poorly in top-k item recommendation. To this end, a theoretical analysis is made for the GAN-style algorithms, showing that a generator of limited capacity diverges from the optimal generator. This may explain the limitation of the discriminator's performance. Based on these findings, we propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR). In this framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling; the efficiency of sample generation is improved with a sampling-decomposable generator, such that each sample can be generated in O(1) with the Vose-Alias method. Interestingly, due to the decomposability of sampling, the generator can be optimized with closed-form solutions in an alternating manner, which differs from the policy gradient used in GAN-style algorithms. We extensively evaluate the proposed algorithm on five real-world recommendation datasets. The results show that SD-GAR outperforms IRGAN by 12.4% and the SOTA recommender by 10% on average. Moreover, discriminator training can be 20x faster on a dataset with more than 120K items.
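The O(1) per-sample cost claimed in the abstract comes from the Vose-Alias method: after an O(n) preprocessing pass, drawing from an arbitrary categorical distribution reduces to one uniform index draw plus one biased coin flip. A minimal sketch of the standard construction (not the paper's implementation; names are illustrative):

```python
import random

def build_alias_table(probs):
    """Build Vose's alias table for a categorical distribution.

    Each scaled probability bucket either keeps its full mass or
    donates its deficit to an "alias" outcome, so every bucket
    holds at most two outcomes.
    """
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] = (scaled[l] + scaled[s]) - 1.0  # donate deficit
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:  # numerical leftovers fill their bucket
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng=random):
    """Draw one sample in O(1): uniform bucket, then biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

In SD-GAR's setting, decomposing the generator into two categorical distributions means each can get its own alias table, so a full sample remains O(1) regardless of catalogue size.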